AI is always positive and agreeable. Is that a problem for children?

While “Google” obviously started as a proper noun, it didn’t take long for it to become a common verb – to google – and become part of everyday language.

Now something slightly different is happening.

We’re not just using AI tools like ChatGPT - we’re interacting with them. Talking to them. In some cases, even naming them.

I’ve heard people refer to “their AI” almost like a helpful assistant or companion. I’ve done it myself. Something personal. Something familiar.

Many people have pet names for their AIs.

But it raises an interesting question – is that a good thing?

Designed to agree

Artificial intelligence tools like ChatGPT are incredibly impressive.  They can explain complex ideas, generate images, help with homework, and respond instantly to almost any question.

However, it’s always important to remember something.

They are designed to be helpful, polite and - in many cases - agreeable.

They are trained to respond in ways that feel useful and satisfying. To keep the interaction going. To avoid conflict. To be, broadly speaking, positive.

For adults, that can feel efficient and supportive.  For children, though, it might be something else entirely.

When “Well Done!” isn’t always helpful

In education, there has been a significant shift over the past couple of decades in how we think about praise.

Research around growth mindset has suggested that praising children purely for outcomes - “That’s brilliant”, “You’re so clever” - is often less effective than recognising effort, strategy and persistence.

What helps children improve is not just being told they’ve done well.  It’s being challenged.

A good teacher doesn’t simply say, “That’s right.” They might ask:

  • Why did you choose that approach?

  • What would happen if you changed this?

  • Can you improve it further?

Learning often sits just beyond comfort. It requires a degree of friction.

The risk of always being supported

AI, by contrast, often removes that friction.  Ask a question, and you’ll get a clear, well-structured answer, generally delivered with great confidence, even when it is wrong!

Complete a task, and you’re likely to receive encouragement.  Refine something, and the AI will happily improve it for you.

That’s incredibly powerful, but it does raise a subtle concern.

If a child’s interaction with AI is always supportive, always positive, always “helpful”… where does the challenge come from?

Because confidence without challenge can be misleading.

A child may feel they are doing well, but may not be stretching their thinking in the way they would in a more demanding learning environment.

When AI becomes personal

This is where the idea of naming AI becomes something I am not entirely comfortable with.

When something has a name, it starts to feel more human.  More like a companion.  More like someone who is on your side.

Again, that’s not necessarily a bad thing.  But if that “someone” is always supportive, always agreeable, always ready with an answer, it can reinforce a particular kind of relationship with a technology that may not always be healthy for a child.

There have also been a small number of widely reported cases where vulnerable individuals - including young people - have formed intensely personal relationships with AI systems, sometimes with very concerning consequences. These situations are complex and rare, and it would be far too simplistic to attribute them solely to the technology itself. But they do highlight something important: when AI starts to feel personal, the way it responds - consistently supportive, non-judgemental and always available - can have a deeper impact than we might initially expect, particularly for children who are still developing emotionally.

An experiment at home

In our house, we’ve started experimenting with something we call “hostile critic mode.”

It sounds more dramatic than it is.

But the idea is simple.  Instead of asking AI to help or improve something, we often ask it to be a hostile critic.  To challenge.  To find weaknesses.  To push back.

Rather than saying “Improve this paragraph”, we might say, “Be a harsh examiner. What is weak about this?” Or, “Don’t give me an answer – ask me a question a good teacher would ask.”

The difference in the response is immediate.  The tone shifts and the AI becomes less reassuring and comforting – but far more useful.

Using AI in a more thoughtful way

This isn’t about avoiding AI.  Far from it.

AI is an incredibly powerful tool, and children will grow up in a world where it is everywhere.

But how they use an AI, and how they see adults using it, does matter.

If it becomes a shortcut to answers, learning may become shallow.

If it becomes a tool for thinking, questioning and refining ideas, it can be transformative and, as parents, we can guide that.

A few small prompt changes can turn AI from a friendly, agreeable chatbot into something far more educational.

  • Ask AI to challenge thinking, not just provide answers

  • Encourage children to explain their reasoning before checking

  • Use AI to ask questions, not just respond to them

  • Explore multiple answers, rather than accepting the first one

In other words, use AI as a thinking partner, not just a question-and-answer machine.

The real question

Artificial intelligence will continue to evolve, becoming more capable, more natural and more embedded in everyday life, and the line between ‘tool’ and ‘companion’ may become increasingly blurred.

As I’ve discussed in previous articles, the question isn’t “Should children be using it?”  They will.

The more important question is this:

Are we helping children use it in a way that develops their thinking – or replaces it?

Because in a world where answers are always available, the real advantage may belong to those who still know how to question, challenge, and think for themselves.

And that’s a skill we shouldn’t let machines take away.
